Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete to develop conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to assist users with increasingly complex tasks, new conversational AI techniques and evaluation platforms are needed. Established in 2021, the Alexa Prize TaskBot Challenge builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, while making use of both voice and visual modalities. The challenge requires TaskBots to identify and understand the user's need, to identify and integrate task and domain knowledge, and to develop new ways of engaging the user without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to the teams with the Cobot Toolkit, and summarizes the approaches the participating teams took to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
A key feature of federated learning (FL) is to preserve the data privacy of end users. However, potential privacy leakage still exists when exchanging gradients under FL. As a result, recent research often explores differential privacy (DP) approaches that add noise to the computed results to address privacy concerns with low overhead, which however degrades model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the social-layer combined results, not the privacy of individuals. We unfold the design of SCFL in three steps. i) Stable social cluster formation. Considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping. For clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates depending on social trust degrees. iii) Distributed convergence. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
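As a rough sketch of the cluster-then-cloud aggregation flow described above (a minimal illustration under assumed shapes and function names, not the paper's implementation):

```python
import numpy as np

def cluster_aggregate(cluster_updates):
    """Average raw model updates inside one mutually trusted social cluster.

    Only this combined result leaves the cluster, so an eavesdropper on the
    uplink sees the social-layer mixture rather than any individual's update.
    """
    return np.mean(cluster_updates, axis=0)

def global_aggregate(cluster_results, cluster_sizes):
    """Cloud-side aggregation, weighting each cluster by its membership."""
    weights = np.asarray(cluster_sizes, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(cluster_results), axes=1)

# Toy example: two clusters of 3 and 2 users with 4-dim "gradients".
rng = np.random.default_rng(0)
clusters = [rng.normal(size=(3, 4)), rng.normal(size=(2, 4))]
mixed = [cluster_aggregate(c) for c in clusters]
global_update = global_aggregate(mixed, cluster_sizes=[3, 2])
```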
We propose an analysis in fair learning that preserves the utility of the data while reducing prediction disparities under the criterion of group sufficiency. We focus on the scenario where the data contain multiple or even many subgroups, each with a limited number of samples. To this end, we present a principled method for learning a fair predictor for all subgroups by formulating it as a bilevel objective. Specifically, the subgroup-specific predictors are learned in the lower level using a small amount of data and the fair predictor. In the upper level, the fair predictor is updated to be close to all subgroup-specific predictors. We further prove that such a bilevel objective can effectively control group sufficiency and generalization error. We evaluate the proposed framework on real-world datasets. Empirical evidence suggests consistently improved fair predictions, as well as accuracy comparable to the baselines.
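One schematic way to write such a bilevel objective (the exact losses and coupling term in the paper may differ):

```latex
\min_{\theta}\; \sum_{k=1}^{K} \bigl\| \theta - \phi_k^{*}(\theta) \bigr\|^2
\quad \text{s.t.} \quad
\phi_k^{*}(\theta) \in \arg\min_{\phi}\; \widehat{L}_k(\phi) + \lambda \,\| \phi - \theta \|^2
```

Here $\theta$ is the shared fair predictor updated in the upper level, $\phi_k$ is the subgroup-$k$ predictor fitted on its few samples in the lower level, $\widehat{L}_k$ is the empirical loss on subgroup $k$, and $\lambda$ controls how strongly subgroup predictors are anchored to the fair predictor.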
Large, annotated datasets are not widely available in medical image analysis, owing to the time, cost, and challenges associated with labeling large datasets. Unlabeled datasets are easier to obtain, and in many contexts it would be feasible for an expert to provide labels for a small subset of images. This work presents an information-theoretic active learning framework that guides the optimal selection of images from the unlabeled pool by maximizing the expected information gain (EIG) on an evaluation dataset. Experiments are performed on two different medical image classification datasets: multi-class diabetic retinopathy grading and multi-class skin lesion classification. Results indicate that, by adapting EIG to account for class imbalance, our proposed Adapted Expected Information Gain (AEIG) outperforms several popular baselines, including diversity-based CoreSet and uncertainty-based maximum entropy sampling. Specifically, AEIG achieves about 95% of overall performance with only 19% of the training data, while other active learning approaches require around 25%. We show that, with careful design choices, our model can be integrated into existing deep learning classifiers.
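A toy acquisition sketch in this spirit, scoring pool images by predictive entropy reweighted against class frequency; the exact adaptation used in AEIG is an assumption here, not the paper's formula:

```python
import numpy as np

def adapted_eig_scores(probs, class_freq, eps=1e-12):
    """Score unlabeled samples by predictive entropy, reweighted toward
    rare classes (a rough stand-in for the paper's adapted EIG).

    probs: (N, C) softmax outputs over the unlabeled pool
    class_freq: (C,) empirical class frequencies in the labeled set
    """
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)   # (N,) uncertainty
    inv_freq = 1.0 / (np.asarray(class_freq) + eps)          # (C,) rarity weights
    rarity = probs @ (inv_freq / inv_freq.sum())             # expected rarity per sample
    return entropy * rarity

def select_batch(probs, class_freq, k):
    """Pick the k pool images with the highest adapted scores."""
    return np.argsort(adapted_eig_scores(probs, class_freq))[-k:]
```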
Image classification with deep neural networks has driven a surge of technological breakthroughs, with promising applications in areas such as face recognition, medical imaging, and autonomous driving. However, in engineering problems such as high-speed imaging of engine fuel injector sprays or body paint sprays, deep neural networks face a fundamental challenge related to the availability of adequate and diverse data. Typically, only thousands or even hundreds of samples are available for training. In addition, the transitions between different spray classes form a continuum, and a high level of domain expertise is required to label the images accurately. In this work, we use mixup as a method for systematically dealing with the data scarcity and ambiguous class boundaries found in industrial spray applications. We show that data augmentation can alleviate the overfitting problem of large neural networks on small datasets, but cannot fundamentally resolve it. We discuss how convex linear interpolation between different classes naturally aligns with the continuous transitions between classes in our application. Our experiments demonstrate that mixup is a simple yet effective method to train an accurate and robust deep neural network classifier with only a few hundred samples.
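Mixup itself is simple; a minimal NumPy version of the convex interpolation, following the standard formulation of Zhang et al. (2018) rather than code from this paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: convex interpolation of two training samples and
    their one-hot labels, with the mixing ratio drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

The interpolated soft labels are what make this a natural fit here: an image mixed from two spray classes receives a label that lies between them, mirroring the continuous physical transition between spray regimes.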
Reliable quantitative analysis of immunohistochemical staining images requires accurate and robust cell detection and classification. Recent weakly supervised methods usually estimate probability density maps for cell recognition. However, in dense cell scenarios, their performance can be limited by the pre- and post-processing, since no universal parameter setting can be found. In this paper, we introduce an end-to-end framework that applies direct regression and classification on pre-set anchor points. Specifically, we propose a pyramid feature aggregation strategy that combines low-level features and high-level semantics simultaneously, providing accurate cell recognition for our purely point-based model. In addition, an optimized cost function is designed to adapt our multi-task learning framework by matching ground-truth and predicted points. Experimental results demonstrate the superior accuracy and efficiency of the proposed method, revealing its great potential for assisting pathologists' assessments.
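The point-matching step could look roughly like the sketch below, pairing predictions to ground truth with a Hungarian assignment over a combined localization/classification cost; the cost terms and weights here are assumptions, not the paper's exact cost function:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(pred_xy, pred_cls_prob, gt_xy, gt_cls, w_loc=1.0, w_cls=1.0):
    """One-to-one matching between predicted and ground-truth cell points.

    pred_xy: (P, 2) predicted point coordinates
    pred_cls_prob: (P, C) predicted class probabilities
    gt_xy: (G, 2) ground-truth point coordinates
    gt_cls: (G,) ground-truth class indices
    """
    # Localization cost: pairwise Euclidean distances, shape (P, G).
    dist = np.linalg.norm(pred_xy[:, None, :] - gt_xy[None, :, :], axis=-1)
    # Classification cost: reward confidence in the ground-truth class.
    cls_cost = -pred_cls_prob[:, gt_cls]            # shape (P, G)
    cost = w_loc * dist + w_cls * cls_cost
    rows, cols = linear_sum_assignment(cost)
    return rows, cols                                # matched pred/gt index pairs
```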
Karyotyping is an important procedure for assessing the possible presence of chromosomal abnormalities. However, because of their non-rigid nature, chromosomes are usually curved in microscopic images, and such deformed shapes hinder cytogeneticists' chromosome analysis. In this paper, we propose a self-attention guided framework to erase the curvature of chromosomes. The proposed framework extracts spatial information and local textures to preserve banding patterns in a regression module. With complementary information from the curved chromosome, a refinement module is designed to further improve fine details. In addition, we propose two dedicated geometric constraints to maintain the length and restore the deformation of chromosomes. To train our framework, we create a synthetic dataset in which curved chromosomes are generated from real-world straight chromosomes through grid deformation. Quantitative and qualitative experiments are conducted on both synthetic and real-world data. Experimental results show that our proposed method can effectively straighten curved chromosomes while preserving banding details and length.
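A plausible rendering of the length constraint, comparing centerline arc lengths before and after straightening; the paper's exact formulation of its two geometric constraints may differ:

```python
import numpy as np

def polyline_length(points):
    """Arc length of a centerline given as an ordered (N, 2) point array."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def length_constraint_loss(curved_centerline, straightened_centerline):
    """Penalize any change in chromosome length during straightening,
    normalized so the loss is scale-invariant."""
    l_in = polyline_length(curved_centerline)
    l_out = polyline_length(straightened_centerline)
    return (l_out - l_in) ** 2 / (l_in ** 2 + 1e-12)
```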
Promotions are becoming more important and prevalent on e-commerce platforms for attracting customers and boosting sales. However, click-through rate (CTR) prediction methods in recommender systems cannot handle such circumstances well, because: 1) they cannot generalize at serving time, since the online data distribution is uncertain due to potentially upcoming promotions; 2) without paying enough attention to scenario signals, they cannot learn the different feature-representation patterns that coexist within each scenario. In this work, we propose the Scenario-Adaptive Mixture-of-Experts (SAME), a simple yet effective model that serves both promotion and normal scenarios. Technically, it follows the idea of mixture-of-experts by adopting multiple experts to learn feature representations, which are modulated by a Feature Gated Network (FGN) via an attention mechanism. To obtain high-quality representations, we design a Stacked Parallel Attention Unit (SPAU) to help each expert better handle user behavior sequences. To tackle the distribution uncertainty, a set of scenario signals is elaborately devised from a time-series-prediction perspective and fed into the FGN, whose outputs are concatenated with the feature representation from each expert to learn the attention. Accordingly, the mixture of feature representations is scenario-adaptive and is used for the final CTR prediction. In this way, each expert can learn a discriminative representation pattern. To the best of our knowledge, this is the first study of promotion-aware CTR prediction. Experimental results on real-world datasets validate the superiority of SAME. Online A/B testing also shows significant gains in CTR and a 5.94% lift in IPV during promotion periods, as well as gains of 3.93% and 6.57% respectively on normal days.
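A minimal PyTorch sketch of an expert mixture gated by scenario signals in this spirit; layer sizes, the gating form, and all names are assumptions, not the paper's architecture (in particular, the SPAU sequence encoder is omitted):

```python
import torch
import torch.nn as nn

class ScenarioAdaptiveMoE(nn.Module):
    """Mixture-of-experts whose expert outputs are attended over by a gate
    that sees scenario signals, loosely following the abstract."""

    def __init__(self, d_in, d_scene, d_hidden, n_experts):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
            for _ in range(n_experts)
        )
        # The gate scores each expert from the scenario signal
        # concatenated with that expert's representation.
        self.gate = nn.Linear(d_scene + d_hidden, 1)
        self.head = nn.Linear(d_hidden, 1)

    def forward(self, x, scene):
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, H)
        scene_rep = scene.unsqueeze(1).expand(-1, outs.size(1), -1)
        logits = self.gate(torch.cat([scene_rep, outs], dim=-1))  # (B, E, 1)
        attn = torch.softmax(logits, dim=1)                       # scenario-adaptive weights
        mixed = (attn * outs).sum(dim=1)                          # (B, H)
        return torch.sigmoid(self.head(mixed)).squeeze(-1)        # CTR estimate
```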
We present a novel information-theoretic analysis of the generalization of meta-learning algorithms. Specifically, our analysis provides a generic understanding of both the conventional learning-to-learn framework and modern model-agnostic meta-learning (MAML) algorithms. Moreover, we provide a data-dependent generalization bound for a stochastic variant of MAML, which is non-vacuous for deep few-shot learning. In contrast to previous bounds that depend on the squared norm of gradients, empirical validation on both simulated data and a well-known few-shot benchmark shows that our bound is orders of magnitude tighter in most situations.
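For context, the classical single-task information-theoretic bound (Xu and Raginsky, 2017) that this style of analysis builds on, where the loss is $\sigma$-sub-Gaussian, $S$ is the $n$-sample training set, and $W$ is the hypothesis output by the algorithm:

```latex
\bigl|\,\mathbb{E}\bigl[L_\mu(W) - \widehat{L}_S(W)\bigr]\,\bigr|
\;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)}
```

The inequality above is background on single-task generalization; the paper's contribution lies in extending bounds of this kind to the meta-learning setting, including MAML.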
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit from this pre-training approach, or benefit only marginally. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using its last layer when the student's depth mismatches the teacher's; 3) weak regularization is preferred; and so on. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
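A simplified sketch of token-relation distillation, matching softmax-normalized token-similarity matrices between student and teacher with a KL divergence; note TinyMIM distills Q-K/V-V attention relations, which the plain token similarities used here only approximate:

```python
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    """Distill token-to-token relations rather than individual features.

    student_tokens, teacher_tokens: (B, N, D) token embeddings taken from
    matched layers (e.g., an intermediate teacher layer per finding 2).
    """
    def relations(tokens):
        t = F.normalize(tokens, dim=-1)
        sim = torch.matmul(t, t.transpose(1, 2)) / tau   # (B, N, N) similarities
        return F.log_softmax(sim, dim=-1)                # row-normalized relations

    s_rel = relations(student_tokens)
    t_rel = relations(teacher_tokens)
    return F.kl_div(s_rel, t_rel, log_target=True, reduction="batchmean")
```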